
    GPT-2: Girl Detective Analyzing AI-Generated Nancy Drew with Stylometry

    Writing styles are often viewed as unique to their writers, a compositional fingerprint of sorts. An analytical tool based upon this assumption is stylometry: the statistical analysis of variations in the literary styles of works, often used to determine the most likely author of a particular work. Stylometric techniques abound in a multitude of fields, including history, literary studies, and even courts of law. Stylometry is often used as evidence of the identities of authors of written material pertaining to legal cases, a famous example being the conviction of the Unabomber based upon stylistic similarities between his earlier essays and his famous manuscript [1]. Stylometric techniques are thus ascribed considerable power. But what if stylometry is not as dependable as it is assumed to be? What if a writer's so-called "unique" style can be easily imitated to fool stylometric tools? In this project, we aim to analyze the ability of AI to generate text stylometrically consistent with the writer upon whom it was trained.
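    A common stylometric building block behind analyses like this one is a profile of function-word frequencies compared between texts. The sketch below illustrates that idea in Python; the word list, file names, and choice of cosine distance are illustrative assumptions, not the project's actual pipeline.

```python
# Minimal stylometric comparison sketch (assumptions noted in the lead-in):
# build relative-frequency profiles over a fixed set of function words and
# measure how far apart two texts are.
from collections import Counter
import math
import re

FUNCTION_WORDS = ["the", "and", "of", "to", "a", "in", "that", "it", "was", "she"]

def style_profile(text: str) -> list:
    """Relative frequency of each function word in the text."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(tokens)
    total = max(len(tokens), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]

def cosine_distance(a, b) -> float:
    """1 - cosine similarity; smaller values mean more similar style profiles."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norm if norm else 1.0

# Hypothetical file names: an original passage vs. a GPT-2-generated passage.
original = open("nancy_drew_original.txt", encoding="utf-8").read()
generated = open("gpt2_generated.txt", encoding="utf-8").read()
print(cosine_distance(style_profile(original), style_profile(generated)))
```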

    Fine-tuning Daria: Exploring the Implications of Temperature, Epochs, & Corpus Size on GPT-2 Screenplay Generation

    GPT-2 has a number of hyperparameters that can be tuned to adjust the generated text, including temperature, number of epochs, size of the training text corpus, batch size, and number of samples generated. Since GPT-2 is a new AI model, the effects of fine-tuning are still relatively unexplored. This paper presents experiments adjusting these GPT-2 parameters and examines how they affect the text output that the model generates.
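    The abstract does not name its tooling, so the sketch below uses the gpt-2-simple library purely to show where the parameters it lists (temperature, training duration, corpus size, batch size, and number of samples) enter a typical GPT-2 fine-tuning workflow; file names and values are placeholders, and gpt-2-simple counts training in steps rather than epochs.

```python
# Sketch of a GPT-2 fine-tuning and sampling run with gpt-2-simple.
# Dataset name, step count, and sampling values are placeholders.
import gpt_2_simple as gpt2

gpt2.download_gpt2(model_name="124M")          # smallest public GPT-2 model
sess = gpt2.start_tf_sess()

# Fine-tune on the screenplay corpus; corpus size is controlled by what goes
# into the dataset file, and training duration by the number of steps.
gpt2.finetune(sess,
              dataset="daria_scripts.txt",     # hypothetical training corpus
              model_name="124M",
              steps=1000,
              run_name="daria_run")

# Generate samples; temperature controls randomness, nsamples and batch_size
# control how many outputs are produced per call.
gpt2.generate(sess,
              run_name="daria_run",
              temperature=0.7,
              nsamples=5,
              batch_size=5,
              length=300)
```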

    Impact of Interpreters Filling Multiple Roles in Mainstream Classrooms on Communication Access for Deaf Students

    Educational interpreters nationwide fill a variety of roles in their schools, including interpreter, tutor, assistant, consultant, and others, and the impact of these roles on the interpretation of classroom discourse is uncertain. In order to provide deaf students with the free appropriate public education they are promised through the Individuals with Disabilities Education Act, we need to know more about the roles educational interpreters are filling and their impact on a deaf student's access to the classroom discourse. This was a quantitative study using naturalistic observation of a high school classroom with a deaf student and an interpreter, augmented with qualitative data from interviews with the interpreter, deaf student, and teacher participants. Across the different roles filled during the class observed, the interpreter in this study filled the interpreter role during only 41.41% of the intervals analyzed. In all, 35.68% of the intervals were interpreted, while 39.78% of the teacher's discourse was not interpreted. Less than 20% of the teacher's discourse was interpreted while the interpreter was in any role other than interpreter. During the days observed, the interpreter in this study spent more time tutoring than interpreting the classroom discourse, even though she was not required to do any tutoring. In this study, communication access seems to have been impacted by the interpreter filling multiple roles in the classroom, particularly the tutor role. Given the importance of social communication in language development, and thus cognitive development, the roles interpreters fill in the classroom, as well as the placement of the deaf student in an inclusion class, should be carefully examined.

    Effects of Handedness and Viewpoint on the Imitation of Origami-Making

    The evolutionary origins of the human bias of roughly 85% right-handedness are obscure. The Apprenticeship Complexity Theory states that the increasing difficulty of acquiring stone tool-making and other manual skills in the Pleistocene favoured learners whose hand preference matched that of their teachers. Furthermore, learning from a viewing position opposite, rather than beside, the demonstrator might be harder because it requires more mental transformation. We varied handedness and viewpoint in a bimanual learning task. Thirty-two participants reproduced the folding of asymmetric origami figures as demonstrated by a videotaped teacher in four conditions (left-handed teacher opposite the learner, left-handed beside, right-handed opposite, or right-handed beside). Learning performance was measured by time to complete each figure, number of video pauses and rewinds, and similarity of copies to the target shape. There was no effect of handedness or viewpoint on imitation learning. However, participants preferred to produce figures with the same asymmetry as demonstrated, indicating that they imitated the teacher's hand preference. We speculate that learning by imitation involves internalising motor representations and that, to facilitate learning by imitation, many motor actions can be flexibly executed using the demonstrated hand configuration. We conclude that matching hand preferences evolved due to the social learning of moderately complex bimanual skills.

    Justifying Juneteenth: A Critical Pairing of Two Children's Texts


    Tris(2,4,6-trifluorophenyl)borane: an efficient hydroboration catalyst

    The metal-free catalyst tris(2,4,6-trifluorophenyl)borane has demonstrated extensive applicability in the 1,2-hydroboration of numerous unsaturated reagents, namely alkynes, aldehydes, and imines, bearing a wide array of electron-withdrawing and electron-donating functionalities. Over 50 borylated products are reported, with many reactions proceeding at low catalyst loading under ambient conditions. In the case of aldehydes and imines, the resulting pinacol boronate esters can be readily hydrolyzed to give the respective alcohols and amines, whereas alkynyl substrates give vinyl boranes. This is of great synthetic use to the organic chemist.

    Time course of information processing in visual and haptic object classification

    Vision identifies objects rapidly and efficiently. In contrast, object recognition by touch is much slower. Furthermore, haptics usually serially accumulates information from different parts of objects, whereas vision typically processes object information in parallel. Is haptic object identification slower simply due to sequential information acquisition and the resulting memory load, or due to more fundamental processing differences between the senses? To compare the time course of visual and haptic object recognition, we slowed visual processing using a novel, restricted viewing technique. In an electroencephalographic (EEG) experiment, participants discriminated familiar, nameable from unfamiliar, unnamable objects both visually and haptically. Analyses focused on the evoked and total fronto-central theta-band (5–7 Hz; a marker of working memory) and the occipital upper alpha-band (10–12 Hz; a marker of perceptual processing) activity locked to the onset of classification. Decreases in total upper alpha-band activity for haptic identification of objects indicate a likely processing role of multisensory extrastriate areas. Long-latency modulations of alpha-band activity differentiated between familiar and unfamiliar objects in haptics but not in vision. In contrast, theta-band activity showed a general increase over time for the slowed-down visual recognition task only. We conclude that haptic object recognition relies on common representations with vision but also that there are fundamental differences between the senses that do not merely arise from differences in their speed of processing.
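    As a rough illustration of the band-power measures the abstract refers to, the sketch below computes power in the theta (5-7 Hz) and upper alpha (10-12 Hz) ranges for a single-channel epoch using a Welch spectral estimate; the sampling rate, the estimator, and the simulated data are assumptions rather than the authors' pipeline, and the evoked/total distinction (averaging trials before versus after the spectral transform) is not shown.

```python
# Sketch: band power for the frequency ranges named in the abstract.
# Sampling rate and data are illustrative assumptions.
import numpy as np
from scipy.signal import welch

FS = 500  # assumed sampling rate in Hz

def band_power(epoch: np.ndarray, low: float, high: float, fs: int = FS) -> float:
    """Mean power spectral density of a single-channel epoch within [low, high] Hz."""
    freqs, psd = welch(epoch, fs=fs, nperseg=fs)  # 1-second windows give 1 Hz resolution
    mask = (freqs >= low) & (freqs <= high)
    return float(psd[mask].mean())

# Example on a simulated 2-second epoch from one channel.
rng = np.random.default_rng(0)
epoch = rng.standard_normal(2 * FS)
theta = band_power(epoch, 5, 7)    # fronto-central theta, working-memory marker
alpha = band_power(epoch, 10, 12)  # occipital upper alpha, perceptual marker
print(theta, alpha)
```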

    The "where" of social attention: Head and body direction aftereffects arise from representations specific to cue type and not direction alone.

    Human beings have remarkable social attention skills. From the initial processing of cues such as eye gaze, head direction, and body orientation, we perceive where other people are attending, allowing us to draw inferences about the intentions, desires, and dispositions of others. But before we can infer why someone is attending to something in the world, we must first accurately represent where they are attending. Here we investigate the "where" of social attention perception and employ adaptation paradigms to ascertain how head and body orientation are visually represented in the human brain. Across two experiments we show that the representation of two cues to social attention (head and body orientation) exists at the category-specific level. This suggests that aftereffects do not arise from "social attention cells" discovered in macaques or from abstract representations of "leftness" or "rightness".